
    Dense Associative Memory for Pattern Recognition

    A model of associative memory is studied that stores and reliably retrieves many more patterns than the number of neurons in the network. We propose a simple duality between this dense associative memory and the neural networks commonly used in deep learning. On the associative memory side of this duality, a family of models can be constructed that smoothly interpolates between two limiting cases: one limit is referred to as the feature-matching mode of pattern recognition, the other as the prototype regime. On the deep learning side of the duality, this family corresponds to feedforward neural networks with one hidden layer and various activation functions, which transmit the activities of the visible neurons to the hidden layer. This family of activation functions includes logistic functions, rectified linear units, and rectified polynomials of higher degrees. The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze the computational properties of neural networks with unusual activation functions, namely the higher rectified polynomials, which until now have not been used in deep learning. The utility of the dense memories is illustrated for two test cases: the logical gate XOR and the recognition of handwritten digits from the MNIST data set. Comment: Accepted for publication at NIPS 2016.
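
    In energy-based terms, a minimal sketch of such a model assumes binary spins with energy E(σ) = −Σ_μ F(ξ^μ · σ) and a rectified polynomial F(x) = max(x, 0)^n; the toy sizes and names below are illustrative, not taken from the paper.

```python
import numpy as np

def rectified_poly(x, n):
    """Rectified polynomial F(x) = max(x, 0)^n, the interpolating family."""
    return np.maximum(x, 0.0) ** n

def dam_update(sigma, patterns, n=3):
    """One asynchronous sweep of a dense associative memory.

    Each spin is set to whichever sign gives the lower energy
    E = -sum_mu F(patterns_mu . sigma).
    """
    sigma = sigma.copy()
    for i in range(len(sigma)):
        sp, sm = sigma.copy(), sigma.copy()
        sp[i], sm[i] = 1.0, -1.0
        # A larger sum of F means lower energy, so prefer the larger value.
        if rectified_poly(patterns @ sp, n).sum() >= rectified_poly(patterns @ sm, n).sum():
            sigma[i] = 1.0
        else:
            sigma[i] = -1.0
    return sigma

# Toy usage: store random binary patterns, retrieve from a corrupted cue.
rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(20, 100))  # 20 patterns, 100 neurons
cue = patterns[0].copy()
cue[:30] *= -1.0                                    # flip 30 of the 100 bits
recovered = dam_update(cue, patterns, n=3)
print(np.mean(recovered == patterns[0]))            # overlap with the stored pattern
```

    With n = 2 the energy is quadratic and the model reduces to the classical Hopfield network; raising n is what allows many more stored patterns than neurons.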

    Dense Associative Memory is Robust to Adversarial Inputs

    Deep neural networks (DNNs) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish examples or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small perturbation, often imperceptible to human vision, so that the resulting deformed input is misclassified by the network. These findings emphasize the differences between the ways DNNs and humans classify patterns, and raise the question of designing learning algorithms that mimic human perception more accurately than existing methods. Our paper examines these questions within the framework of Dense Associative Memory (DAM) models. These models are defined by an energy function with higher-order (higher than quadratic) interactions between the neurons. We show that in the limit where the power of the interaction vertex in the energy function is sufficiently large, these models have the following three properties. First, the minima of the objective function are free from rubbish images, so that each minimum is a semantically meaningful pattern. Second, artificial patterns poised precisely at the decision boundary look ambiguous to human subjects and share aspects of both classes separated by that boundary. Third, adversarial images constructed with models of small interaction-vertex power, which are equivalent to DNNs with rectified linear units (ReLUs), fail to transfer to and fool the models with higher-order interactions. This opens up the possibility of using higher-order models for detecting and stopping malicious adversarial attacks. The presented results suggest that DAMs with higher-order energy functions are closer to human visual perception than DNNs with ReLUs.
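
    The role of the interaction power is easiest to probe in the dual feedforward form. Below is a hedged sketch, with illustrative shapes and hand-rolled gradients, of a fast-gradient-sign perturbation against a one-hidden-layer network with activation f(x) = max(x, 0)^n; n = 1 is the ReLU-equivalent case whose adversarial images, per the abstract, fail to transfer to large-n models.

```python
import numpy as np

def f(x, n):
    """Rectified polynomial activation; n = 1 is plain ReLU."""
    return np.maximum(x, 0.0) ** n

def f_prime(x, n):
    """Derivative of f; the (x > 0) mask keeps the n = 1 case correct."""
    return n * np.maximum(x, 0.0) ** (n - 1) * (x > 0)

def fgsm(x, y, W, U, n, eps):
    """Fast-gradient-sign perturbation for h = f(Wx), p = softmax(Uh),
    with cross-entropy loss against the true label y."""
    z = W @ x
    logits = U @ f(z, n)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad_logits = p.copy()
    grad_logits[y] -= 1.0                      # dL/dlogits for cross-entropy
    grad_x = W.T @ (f_prime(z, n) * (U.T @ grad_logits))
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
D, H, C = 32, 16, 3                            # toy input/hidden/class sizes
W, U = rng.normal(size=(H, D)), rng.normal(size=(C, H))
x = rng.normal(size=D)
x_adv = fgsm(x, y=0, W=W, U=U, n=1, eps=0.1)   # craft at n = 1 (ReLU-like)
```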

    Earthquake cycles and neural reverberations

    Driven systems of interconnected blocks with stick-slip friction capture the main features of earthquake processes. Their microscopic dynamics closely resemble those of spiking nerve cells. We analyze the differences in the collective behavior and introduce a class of solvable models. We prove that the models exhibit rapid phase locking, a phenomenon of particular interest to both geophysics and neurobiology. We study the dependence on initial conditions and system parameters, and discuss implications for earthquake modeling and neural computation.
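
    As a toy illustration of the shared dynamics, and not the paper's solvable model, a pulse-coupled integrate-and-fire simulation shows the locking mechanism: uniformly driven units (tectonic loading or input current) fire on reaching threshold, kick all the others, and units that fire in the same cascade reset together and stay together.

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps, dt = 50, 5000, 0.001
drive, thresh, eps = 1.5, 1.0, 0.02     # loading rate, threshold, pulse size
v = rng.uniform(0.0, thresh, N)         # random initial states

for _ in range(steps):
    v += dt * drive                     # slow uniform loading
    just_fired = np.zeros(N, dtype=bool)
    while True:                         # resolve the firing cascade
        crossing = (v >= thresh) & ~just_fired
        if not crossing.any():
            break
        just_fired |= crossing
        v[~just_fired] += eps * crossing.sum()  # each pulse kicks the rest
    v[just_fired] = 0.0                 # synchronous reset: co-firing units merge

print(len(np.unique(np.round(v, 9))))   # number of surviving phase clusters
```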

    Neural network computation by in vitro transcriptional circuits

    The structural similarity of neural networks and genetic regulatory networks to digital circuits, and hence to each other, was noted from the very beginning of their study [1, 2]. In this work, we propose a simple biochemical system whose architecture mimics that of genetic regulation and whose components allow for in vitro implementation of arbitrary circuits. We use only two enzymes in addition to DNA and RNA molecules: RNA polymerase (RNAP) and ribonuclease (RNase). We develop a rate equation for in vitro transcriptional networks and derive a correspondence with general neural network rate equations [3]. As proof-of-principle demonstrations, an associative memory task and a feedforward network computation are shown by simulation. A difference between the neural network and biochemical models is also highlighted: global coupling of the rate equations through enzyme saturation can lead to global feedback regulation, thus allowing a simple network without explicit mutual inhibition to perform a winner-take-all computation. Thus, the full complexity of the cell is not necessary for biochemical computation: a wide range of functional behaviors can be achieved with a small set of biochemical components.
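
    To make the winner-take-all point concrete, here is a hedged stand-in rather than the paper's chemistry: several species whose production is catalyzed by a single saturable enzyme pool. The shared Michaelis-Menten denominator couples every rate equation globally, and that coupling alone lets the best-driven species exclude the rest, with no explicit mutual inhibition.

```python
import numpy as np

# Illustrative parameters, not fitted to any experiment.
rates = np.array([1.0, 1.2, 1.5, 1.1])   # per-species catalytic drive a_i
K, delta, dt = 1.0, 0.5, 0.01            # saturation constant, decay, time step
x = np.full(4, 0.1)                      # initial concentrations

for _ in range(20000):
    pool = K + x.sum()                   # one enzyme pool saturated by everyone
    x += dt * (rates * x / pool - delta * x)

print(np.round(x, 3))  # only the species with the largest drive survives
```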

    Unsupervised Learning by Competing Hidden Units

    It is widely believed that the backpropagation algorithm is essential for learning good feature detectors in the early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of the network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule, which has a degree of biological plausibility and is motivated by Hebb's idea that the change of a synapse's strength should be local, i.e. should depend only on the activities of the pre- and postsynaptic neurons. We design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way. These learned lower-layer feature detectors can then be used to train the higher-layer weights in the usual supervised way, so that the performance of the full network is comparable to that of standard feedforward networks trained end-to-end with the backpropagation algorithm.
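
    A minimal sketch of such a rule, assuming hard winner-take-all inhibition and an Oja-style local update (the paper's actual rule also gives an anti-Hebbian penalty to a runner-up unit, omitted here); the data and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_visible, lr = 16, 64, 0.02
prototypes = rng.normal(size=(8, n_visible))      # hidden structure in the data
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
W = rng.normal(size=(n_hidden, n_visible))
W /= np.linalg.norm(W, axis=1, keepdims=True)

for _ in range(10000):
    x = prototypes[rng.integers(8)] + 0.1 * rng.normal(size=n_visible)
    winner = np.argmax(W @ x)          # global inhibition: one unit stays active
    y = W[winner] @ x                  # pre/post activities are all the rule sees
    W[winner] += lr * y * (x - y * W[winner])   # local Oja-style Hebbian update

# No labels were used; each prototype should now be captured by some unit.
print(np.round((W @ prototypes.T).max(axis=0), 2))
```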

    Rapid, parallel path planning by propagating wavefronts of spiking neural activity

    Efficient path planning and navigation are critical for animals, robotics, logistics, and transportation. We study a model in which spatial navigation problems can be solved rapidly in the brain by parallel mental exploration of alternative routes using propagating waves of neural activity. A wave of spiking activity propagates through a hippocampus-like network, altering the synaptic connectivity. The resulting vector field of synaptic change then guides a simulated animal to the appropriate selected target locations. We demonstrate that the navigation problem can be solved using realistic, local synaptic plasticity rules during a single passage of a wavefront. Our model can find optimal solutions among competing possible targets and can learn and navigate in multiple environments. The model provides a hypothesis about the possible computational mechanisms for optimal path planning in the brain, and is at the same time useful for neuromorphic implementations, where the parallelism of the information processing proposed here can be fully harnessed in hardware.
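
    A conventional serial analogue of this computation is the wavefront planner below: arrival times are flooded outward from the target, then followed downhill from the start. The sketch (plain breadth-first search, not a spiking model) mirrors only the computational principle; the grid and positions are made up.

```python
from collections import deque

grid = ["..........",
        "..####....",
        "..#.......",
        "..#.####..",
        ".........."]
rows, cols = len(grid), len(grid[0])
target, start = (0, 9), (4, 0)

# Wavefront expansion: breadth-first arrival times, radiating from the target.
arrival = {target: 0}
queue = deque([target])
while queue:
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nxt = (r + dr, c + dc)
        if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                and grid[nxt[0]][nxt[1]] != "#" and nxt not in arrival):
            arrival[nxt] = arrival[(r, c)] + 1
            queue.append(nxt)

# Readout: from the start, always step to the neighbor that fired earliest.
path, cell = [start], start
while cell != target:
    cell = min((n for n in ((cell[0] + dr, cell[1] + dc)
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                if n in arrival), key=arrival.get)
    path.append(cell)
print(path)
```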

    Light propagation in atomic Mott Insulators

    We study radiation-matter interaction in a system of ultracold atoms trapped in an optical lattice in a Mott insulator phase. We develop a fully general quantum model, and we perform calculations for a one-dimensional geometry at normal incidence. Both two-level and three-level $\Lambda$ atomic configurations are studied. The polariton dispersion and the reflectivity spectra are characterized in the different regimes, for both semi-infinite and finite-size geometries. We apply this model to propose a photon energy lifter experiment: a device able to shift the carrier frequency of a slowly travelling wavepacket without affecting either the pulse shape or its coherence.

    Three-dimensional quantization of the electromagnetic field in dispersive and absorbing inhomogeneous dielectrics

    A quantization scheme for the phenomenological Maxwell theory of the full electromagnetic field in an inhomogeneous three-dimensional, dispersive and absorbing dielectric medium is developed. The classical Maxwell equations with a spatially varying, Kramers-Kronig-consistent permittivity are regarded as operator-valued field equations, with additional current- and charge-density operator fields introduced in order to take into account the noise associated with dissipation in the medium. It is shown that the equal-time commutation relations between the fundamental electromagnetic fields $\hat{E}$ and $\hat{B}$ and the potentials $\hat{A}$ and $\hat{\phi}$ in the Coulomb gauge can be expressed in terms of the Green tensor of the classical problem. From the Green tensors for bulk material and for an inhomogeneous medium consisting of two bulk dielectrics with a common planar interface, it is explicitly proven that the well-known equal-time commutation relations of QED are preserved.
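
    For orientation, the central objects of such a scheme can be sketched in the standard Langevin-noise form (quoted as a sketch from the general literature, not verbatim from this paper): the Maxwell equations reduce in the frequency domain to an operator wave equation driven by a noise current $\hat{\mathbf j}_N$, whose solution is carried by the classical Green tensor $G$.

```latex
\left[\nabla\times\nabla\times{} - \frac{\omega^{2}}{c^{2}}\,
  \varepsilon(\mathbf r,\omega)\right]\hat{\mathbf E}(\mathbf r,\omega)
  = i\omega\mu_{0}\,\hat{\mathbf j}_{N}(\mathbf r,\omega),
\qquad
\hat{\mathbf E}(\mathbf r,\omega)
  = i\omega\mu_{0}\int \mathrm d^{3}r'\,
    G(\mathbf r,\mathbf r',\omega)\,\hat{\mathbf j}_{N}(\mathbf r',\omega).
```

    Because the noise current's commutator is tied to $\operatorname{Im}\varepsilon$ by the fluctuation-dissipation theorem, equal-time field commutators reduce to integrals over $G$, which is why they can be expressed through the Green tensor as stated above.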